
    Human Computer Music Performance

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers together with real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synthesis, and sound diffusion. Thus, HCMP is an ideal framework to motivate and integrate advanced music research. In addition, HCMP has the potential to benefit millions of practicing musicians, amateurs and professionals alike. The vision of HCMP, the problems that must be solved, and some recent progress are presented.

    Learning an Orchestra Conductor's Technique Using a Wearable Sensor Platform

    Our study focuses on finding new input devices for a system that allows users of any skill level to configure and conduct a virtual orchestra in real time. As a first step, we conducted a user study to learn more about the interaction between a conductor's gestures and the orchestra's reaction. During an orchestra rehearsal session, we observed a conductor's timing and gestures using the eWatch, a wrist-worn wearable computer and sensor platform. The gestures are analyzed and compared to the music of the orchestra.
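
    The abstract does not specify how the gestures were compared to the music. As a purely illustrative sketch (the function names and thresholds here are hypothetical, not the eWatch pipeline), beat gestures could be picked out of the wrist-acceleration magnitude by simple peak detection and then timed against the orchestra's beats:

    ```python
    def detect_beats(accel, threshold=1.5):
        # Toy beat detector: a sample counts as a beat gesture if it is a
        # local maximum of acceleration magnitude above a threshold.
        # (Hypothetical stand-in for the actual gesture analysis.)
        beats = []
        for i in range(1, len(accel) - 1):
            if accel[i] > threshold and accel[i] >= accel[i - 1] and accel[i] > accel[i + 1]:
                beats.append(i)
        return beats

    def mean_offset(gesture_beats, music_beats):
        # Pair each gesture beat with the nearest musical beat and average
        # the offset: positive means the gesture leads the orchestra.
        return sum(min(music_beats, key=lambda m: abs(m - g)) - g
                   for g in gesture_beats) / len(gesture_beats)

    accel = [0.0, 2.0, 0.0, 0.0, 2.0, 0.0]
    print(detect_beats(accel))          # [1, 4]
    print(mean_offset([1, 4], [2, 5]))  # 1.0
    ```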

    Languages for Computer Music

    Specialized languages for computer music have long been an important area of research in this community. Computer music languages have enabled composers who are not software engineers to nevertheless use computers effectively. While powerful general-purpose programming languages can be used for music tasks, experience has shown that time plays a special role in music computation, and languages that embrace musical time are especially expressive for many musical tasks. Time is expressed in procedural languages through schedulers and abstractions of beats, duration, and tempo. Functional languages have been extended with temporal semantics, and object-oriented languages are often used to model stream-based computation of audio. This article considers models of computation that are especially important for music programming, how these models are supported in programming languages, and how this leads to expressive and efficient programs. Concrete examples are drawn from some of the most widely used music programming languages.
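
    The scheduler-and-tempo abstraction described above can be sketched in a few lines of Python. This is an invented minimal illustration, not the design of any specific computer music language: events are queued at beat positions and mapped to wall-clock seconds through the current tempo.

    ```python
    import heapq

    class BeatScheduler:
        # Minimal sketch of a musical-time scheduler: callbacks are queued
        # at beat positions; the tempo maps beats to seconds.
        def __init__(self, bpm=120.0):
            self.bpm = bpm
            self.queue = []   # (beat, insertion_order, callback)
            self.seq = 0      # tie-breaker so callbacks are never compared

        def at_beat(self, beat, callback):
            heapq.heappush(self.queue, (beat, self.seq, callback))
            self.seq += 1

        def beat_to_seconds(self, beat):
            # tempo abstraction: beats -> wall-clock time
            return beat * 60.0 / self.bpm

        def run(self):
            fired = []
            while self.queue:
                beat, _, cb = heapq.heappop(self.queue)
                fired.append((self.beat_to_seconds(beat), cb()))
            return fired

    sched = BeatScheduler(bpm=120)   # at 120 BPM, one beat = 0.5 s
    sched.at_beat(1, lambda: "snare")
    sched.at_beat(0, lambda: "kick")
    print(sched.run())  # [(0.0, 'kick'), (0.5, 'snare')]
    ```

    A real scheduler would sleep until each event's time and support tempo changes mid-performance; the point here is only that "beat 1" is a musical address, independent of seconds, until the tempo resolves it.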

    Motif-Centric Representation Learning for Symbolic Music

    The music motif, as a conceptual building block of composition, is crucial for music structure analysis and automatic composition. While human listeners can identify motifs easily, existing computational models fall short in representing motifs and their developments. The reason is that the nature of motifs is implicit, and the diversity of motif variations extends beyond simple repetitions and modulations. In this study, we aim to learn the implicit relationship between motifs and their variations via representation learning, using the Siamese network architecture and a pretraining and fine-tuning pipeline. A regularization-based method, VICReg, is adopted for pretraining, while contrastive learning is used for fine-tuning. Experimental results on a retrieval-based task show that these two methods complement each other, yielding an improvement of 12.6% in the area under the precision-recall curve. Lastly, we visualize the acquired motif representations, offering an intuitive comprehension of the overall structure of a music piece. To our knowledge, this work marks a noteworthy step forward in the computational modeling of music motifs. We believe that this work lays the foundations for future applications of motifs in automatic music composition and music information retrieval.
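
    The VICReg objective mentioned above combines an invariance term between paired embeddings (e.g. a motif and one of its variations) with variance and covariance regularizers that prevent collapse. A hedged pure-Python sketch of the loss follows; the coefficients are the defaults from the original VICReg work, not necessarily those used in this study, and real training would compute this on autograd tensors.

    ```python
    def vicreg_loss(za, zb, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
        # za, zb: lists of embedding vectors (batch x dim) for two views.
        n, d = len(za), len(za[0])

        # invariance: mean squared distance between paired embeddings
        sim = sum((a - b) ** 2
                  for ra, rb in zip(za, zb)
                  for a, b in zip(ra, rb)) / (n * d)

        def var_cov(z):
            mu = [sum(row[j] for row in z) / n for j in range(d)]
            centered = [[row[j] - mu[j] for j in range(d)] for row in z]
            # variance hinge: keep each dimension's std above 1
            var = 0.0
            for j in range(d):
                v = sum(c[j] ** 2 for c in centered) / (n - 1)
                var += max(0.0, 1.0 - (v + eps) ** 0.5)
            var /= d
            # covariance: push off-diagonal covariance entries toward zero
            cov = 0.0
            for i in range(d):
                for j in range(d):
                    if i != j:
                        cij = sum(c[i] * c[j] for c in centered) / (n - 1)
                        cov += cij ** 2
            cov /= d
            return var, cov

        va, ca = var_cov(za)
        vb, cb = var_cov(zb)
        return sim_w * sim + var_w * (va + vb) + cov_w * (ca + cb)
    ```

    Because variance and covariance are shift-invariant, translating one view away from its pair raises only the invariance term, which is exactly the behavior the pretraining relies on.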

    Soundcool: collaborative creativity at a distance

    Real-time audiovisual creation, collaborative and at a distance. These few words summarize the main characteristics that the Soundcool project has managed to integrate into its software system, transforming the "here and now" into "anywhere and now." Scarani, S.; Lloret Romero, MN.; Sastre, J.; Dannenberg, RB. (2021). Soundcool: creatividad colaborativa a distancia. Tsantsa: Revista de Investigaciones Artísticas (Online). (12):63-75. https://doi.org/10.18537/tria.12.01.0763751

    A Human-Computer Duet System for Music Performance

    Virtual musicians have become a remarkable phenomenon in the contemporary multimedia arts. However, most virtual musicians today have not been endowed with the ability to create their own behaviors or to perform music with human musicians. In this paper, we first create a virtual violinist who can collaborate with a human pianist to perform chamber music automatically without any intervention. The system incorporates techniques from various fields, including real-time music tracking, pose estimation, and body movement generation. In our system, the virtual musician's behavior is generated from the given music audio alone, which makes the system a low-cost, efficient, and scalable way to produce co-performances between human and virtual musicians. The proposed system has been validated in public concerts. Objective quality assessment approaches and possible ways to systematically improve the system are also discussed.
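
    Real-time music tracking is the component that keeps the virtual violinist synchronized with the pianist. Production systems align audio features such as chroma with online dynamic time warping; the toy symbolic follower below is purely illustrative (the `follow` function and its window size are invented for this sketch), but it shows the core idea: advance a score position monotonically for each performed event, tolerating wrong notes.

    ```python
    def follow(score, performance):
        # Toy score follower: for each performed pitch, report the estimated
        # score position (number of score notes consumed). The position only
        # moves forward, and a small look-ahead window absorbs mistakes.
        pos = 0
        estimates = []
        for note in performance:
            window = score[pos:pos + 4]   # search a few notes ahead
            if note in window:
                pos += window.index(note) + 1
            # if the note is not found (a mistake), hold the position
            estimates.append(pos)
        return estimates

    score = [60, 62, 64, 65]              # MIDI pitches of the score
    print(follow(score, [60, 62, 64]))    # [1, 2, 3]  clean performance
    print(follow(score, [60, 99, 64]))    # [1, 1, 3]  recovers after a wrong note
    ```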